12 research outputs found

    Fast Monte-Carlo Localization on Aerial Vehicles using Approximate Continuous Belief Representations

    Full text link
    Size, weight, and power constraints limit the computational resources available on aerial platforms, introducing unique challenges in implementing localization algorithms. We present a framework that performs fast localization on such platforms by exploiting the compressive capabilities of Gaussian Mixture Model (GMM) representations of point cloud data. Given raw structural data from a depth sensor and pitch and roll estimates from an on-board attitude reference system, a multi-hypothesis particle filter localizes the vehicle by evaluating the likelihood of the data under the mixture model. We analyze this likelihood in the vicinity of the ground-truth pose, detail its use in a particle filter-based vehicle localization strategy, and present results from real-time implementations on a desktop system and an off-the-shelf embedded platform that outperform a state-of-the-art algorithm in the same environment.
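
    As a concrete illustration of the scoring step, the following Python sketch weights each particle pose hypothesis by the likelihood of the transformed depth points under a GMM map. It is a minimal sketch under assumed conventions, not the authors' implementation; the (R, t) pose representation and all names here are illustrative.

        import numpy as np
        from scipy.stats import multivariate_normal

        def gmm_log_likelihood(points, weights, means, covs):
            # Per-component weighted densities, shape (K, N), for N x 3 points
            dens = np.stack([w * multivariate_normal.pdf(points, mean=m, cov=c)
                             for w, m, c in zip(weights, means, covs)])
            # Sum components, then sum log-likelihood over all points
            return np.sum(np.log(dens.sum(axis=0) + 1e-12))

        def weight_particles(particles, scan, gmm_params):
            # particles: list of (R, t) pose hypotheses; scan: N x 3 points in body frame
            log_w = np.array([gmm_log_likelihood(scan @ R.T + t, *gmm_params)
                              for R, t in particles])
            log_w -= log_w.max()              # stabilise before exponentiating
            w = np.exp(log_w)
            return w / w.sum()                # normalised particle weights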

    Vision and Learning for Deliberative Monocular Cluttered Flight

    Full text link
    Cameras provide a rich source of information while being passive, cheap, and lightweight for small and medium Unmanned Aerial Vehicles (UAVs). In this work we present the first implementation of receding horizon control, which is widely used in ground vehicles, with monocular vision as the only sensing mode for autonomous UAV flight in dense clutter. We make it feasible on UAVs via a number of contributions: a novel coupling of perception and control via multiple relevant and diverse interpretations of the scene around the robot, leveraging recent advances in machine learning for anytime budgeted cost-sensitive feature selection, and fast non-linear regression for monocular depth prediction. We empirically demonstrate the efficacy of our novel pipeline via real-world experiments of more than 2 km through dense trees with a quadrotor built from off-the-shelf parts. Moreover, our pipeline is designed to also combine information from other modalities, such as stereo and lidar, when available.
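
    The receding-horizon loop described here reduces to a simple pattern: predict depth from the current image, score a precomputed library of candidate trajectories against it, fly the best one for a short interval, and replan. A minimal Python sketch follows; predict_depth, project, and the cost shape are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def trajectory_cost(waypoints_px, depth, clearance=1.5):
            # waypoints_px: M x 3 array of (u, v, expected_depth) projections
            u, v, expected = waypoints_px.T
            observed = depth[v.astype(int), u.astype(int)]
            margin = observed - expected          # distance to nearest structure
            return np.sum(np.maximum(0.0, clearance - margin))

        def receding_horizon_step(image, library, predict_depth, project):
            depth = predict_depth(image)          # fast non-linear regression
            costs = [trajectory_cost(project(t), depth) for t in library]
            return library[int(np.argmin(costs))] # execute briefly, then replan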

    Online Inference of Joint Occupancy using Forward Sensor Models and Trajectory Posteriors for Deliberate Robot Navigation

    No full text
    Robotic navigation algorithms for real-world robots require dense and accurate probabilistic volumetric representations of the environment in order to traverse efficiently. Sensor data in a Simultaneous Localisation And Mapping (SLAM) context, however, always has associated acquisition noise and pose uncertainty, and encoding this within the map representation while still maintaining computational tractability is a key challenge in deploying these systems outside of controlled laboratory settings. The occupancy inference problem is essentially a high-dimensional search in the space of all maps. By incorporating the physics of sensor formation using forward models, it is possible to reason in terms of the likelihood of the measurements for a given map hypothesis and obtain a solution that explains the noisy observations as well as possible. However, this approach to mapping has historically been prohibitively expensive to compute in real time, so conventional robotic mapping algorithms have primarily relied on limiting assumptions to maintain tractability.

    In this thesis we present a framework that explicitly reasons about the conditional dependence imposed on the occupancy of voxels traversed by each ray of a depth camera as a Markov Random Field (MRF). The tight intra- and inter-ray coupling explicitly incorporates conditional dependence of the occupancy of individual voxels, as opposed to the independent log-odds Bayes filter updates made by conventional occupancy maps. Visibility constraints imposed by a forward sensor model enable simplification of the otherwise high-dimensional inference, and the forward model allows learnt sensor noise characteristics to be incorporated for accurate inference. Instead of marginalising sensor data immediately, data from camera poses is retained and can be added, moved, or removed in an ad-hoc fashion while performing inference. To avoid prohibitive sensor data storage costs, an extension to using the framework in a submapping setting with pose graphs is presented, with sensor data marginalisation deferred until as late as possible. Marginalisation is performed using succinct parametric Gaussian distribution representations. Finally, Gaussian mixture model map representations are demonstrated to be capable of providing robust localisation in multi-hypothesis settings. All of this is made real-time feasible by the inherent parallelisability of the proposed framework and is implemented on GPUs.
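
    For contrast, the conventional baseline the thesis argues against treats every voxel along a ray independently, applying a per-voxel log-odds Bayes filter update with clamping. A minimal sketch of that baseline (not the thesis's MRF method; the increment and clamp values are assumed, illustrative constants):

        import numpy as np

        L_OCC, L_FREE = 0.85, -0.4    # log-odds increments (assumed values)
        L_MIN, L_MAX = -2.0, 3.5      # clamping bounds

        def update_ray(log_odds, free_voxels, hit_voxel):
            # log_odds: dict mapping voxel index -> current log-odds value.
            # Each voxel between the sensor and the hit is updated
            # independently, ignoring the intra-ray coupling the MRF captures.
            for v in free_voxels:
                log_odds[v] = np.clip(log_odds.get(v, 0.0) + L_FREE, L_MIN, L_MAX)
            log_odds[hit_voxel] = np.clip(log_odds.get(hit_voxel, 0.0) + L_OCC, L_MIN, L_MAX)
            return log_odds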

    Learning Monocular Reactive UAV Control in Cluttered Natural Environments

    No full text
    Autonomous navigation for large Unmanned Aerial Vehicles (UAVs) is fairly straightforward, as expensive sensors and monitoring devices can be employed. In contrast, obstacle avoidance remains a challenging task for Micro Aerial Vehicles (MAVs), which operate at low altitude in cluttered environments. Unlike large vehicles, MAVs can only carry very light sensors, such as cameras, making autonomous navigation through obstacles much more challenging. In this paper, we describe a system that navigates a small quadrotor helicopter autonomously at low altitude through natural forest environments. Using only a single cheap camera to perceive the environment, we are able to maintain a constant velocity of up to 1.5 m/s. Given a small set of human pilot demonstrations, we use recent state-of-the-art imitation learning techniques to train a controller that can avoid trees by adapting the MAV's heading. We demonstrate the performance of our system in a more controlled environment indoors, and in real natural forest environments outdoors.
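
    The imitation-learning loop referenced here follows a DAgger-style pattern: train a regressor on pilot demonstrations, fly under the learned controller, have the pilot relabel the states the learner actually visits, aggregate, and retrain. A hedged Python sketch, where the ridge regressor and the helper functions extract_features and fly_and_relabel are assumptions for illustration:

        import numpy as np
        from sklearn.linear_model import Ridge

        def train_controller(features, headings):
            # Regress the pilot's heading command from image features
            model = Ridge(alpha=1.0)
            model.fit(features, headings)
            return model

        def dagger_iteration(model, extract_features, fly_and_relabel, X, y):
            # Fly under the current learner; the pilot relabels visited states
            images, expert_headings = fly_and_relabel(model)
            X_new = np.vstack([extract_features(im) for im in images])
            X, y = np.vstack([X, X_new]), np.concatenate([y, expert_headings])
            return train_controller(X, y), X, y   # aggregate and retrain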
